13 research outputs found

    NimbleCache - low cost, dynamic cache allocation in constrained edge environments

    Edge computing and caching of data in the Internet of Things (IoT) offer several benefits, such as reduced energy consumption by IoT end devices, increased availability of data and improved Quality of Service (QoS). In typical IoT scenarios, edge nodes (gateways) support several end devices, each of which may produce data in different patterns. In addition, data generated by different types of end devices varies in its application QoS requirements while also varying widely in how it is accessed by IoT services. Managing the data storage resources at edge nodes in such scenarios is a difficult task, especially since the edge nodes themselves may have limited computation capability and storage space. In this paper, we propose a dynamic, differentiated edge cache allocation strategy called NimbleCache that has low computational requirements and performs efficient cache allocation at edge nodes. Based on a Mixture Density Network (MDN), NimbleCache allocates varying portions of the edge cache to the traffic of different IoT applications to achieve cache hit ratios very close to the target hit ratio. Simulation results show that NimbleCache achieves a good average cache hit ratio with low cache space requirements and small computational overhead.
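
    The abstract gives no implementation details, so the sketch below shows one plausible shape of an MDN-driven allocation step: it assumes the MDN predicts, for each application class, a Gaussian mixture over the cache size needed to reach the target hit ratio, takes a quantile of that mixture, and scales the per-class demands to fit the cache budget. The class names, mixture parameters and the 0.9 quantile are placeholders, not values from the paper.

```python
# Illustrative sketch (not the authors' code): allocate an edge cache among
# IoT application classes from per-class Gaussian-mixture predictions of the
# cache size needed to reach a target hit ratio. The mixture parameters would
# come from a trained Mixture Density Network; here they are placeholders.
import math

def mixture_cdf(x, weights, means, sigmas):
    """CDF of a 1-D Gaussian mixture evaluated at x."""
    return sum(w * 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2.0))))
               for w, m, s in zip(weights, means, sigmas))

def required_size(quantile, weights, means, sigmas, lo=0.0, hi=1e6):
    """Smallest cache size whose mixture CDF reaches `quantile` (bisection)."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if mixture_cdf(mid, weights, means, sigmas) < quantile:
            lo = mid
        else:
            hi = mid
    return hi

def allocate(total_cache, per_class_params, quantile=0.9):
    """Scale per-class size estimates so they fit the total cache budget."""
    demands = {c: required_size(quantile, *p) for c, p in per_class_params.items()}
    scale = min(1.0, total_cache / sum(demands.values()))
    return {c: d * scale for c, d in demands.items()}

# Hypothetical MDN outputs for two application classes: (weights, means, sigmas)
params = {
    "sensor-telemetry": ([0.7, 0.3], [120.0, 300.0], [20.0, 50.0]),
    "video-metadata":   ([1.0],      [800.0],        [100.0]),
}
print(allocate(total_cache=1000.0, per_class_params=params))
```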

    Achieving optimal cache utility in constrained wireless networks through federated learning

    Edge computing allows constrained end devices in wireless networks to offload heavy computing tasks or data storage when local resources are insufficient. Edge nodes can provide resources such as bandwidth, storage and in-network compute power. For example, edge nodes can provide data caches to which constrained end devices can offload their data and from where users can access the data more effectively. However, fair allocation of these resources to competing end devices and data classes while providing good Quality of Service is a challenging task, due to frequently changing network topology and/or traffic conditions. In this paper, we present Federated learning-based dynamic Cache allocation (FedCache) for edge caches in dynamic, constrained networks. FedCache uses federated learning to learn the benefit of a particular cache allocation with low communication overhead. Edge nodes learn locally to adapt to different network conditions and collaboratively share this knowledge so as to avoid having to transmit all data to a single location. Through this federated learning approach, nodes can find resource allocations that result in maximum fairness or efficiency in terms of the cache hit ratio for a given network state. Simulation results show that cache resource allocation using FedCache results in optimal fairness or efficiency of utility for different classes of data when compared to proportional allocation, while incurring low communication overhead.
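
    As a rough illustration of the federated idea described above (not the FedCache implementation), the sketch below averages the weights of a tiny per-node linear model that predicts cache hit ratio from an allocation fraction. The model form, learning rate and synthetic observations are assumptions made for the example; only model weights, never raw observations, leave a node.

```python
# Illustrative sketch (not FedCache): federated averaging of a tiny linear
# model each edge node fits locally to predict cache hit ratio from an
# allocation fraction.
import random

def local_update(weights, samples, lr=0.05, epochs=20):
    """One node's local training: SGD on (allocation, observed_hit_ratio) pairs."""
    w0, w1 = weights
    for _ in range(epochs):
        for alloc, hit in samples:
            err = (w0 + w1 * alloc) - hit
            w0 -= lr * err          # gradient of squared error w.r.t. bias
            w1 -= lr * err * alloc  # gradient w.r.t. slope
    return (w0, w1)

def federated_round(global_weights, per_node_samples):
    """Each node trains locally; only the model weights are shared and averaged."""
    updates = [local_update(global_weights, s) for s in per_node_samples]
    return tuple(sum(ws) / len(ws) for ws in zip(*updates))

# Synthetic local observations: hit ratio grows (noisily) with allocation.
random.seed(1)
nodes = [[(a, min(1.0, 0.2 + 0.7 * a + random.gauss(0, 0.02)))
          for a in (0.2, 0.4, 0.6, 0.8)] for _ in range(3)]

weights = (0.0, 0.0)
for _ in range(10):
    weights = federated_round(weights, nodes)
print("learned hit-ratio model:", weights)
```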

    Deadline-Aware TDMA Scheduling for Multihop Networks Using Reinforcement Learning

    Time division multiple access (TDMA) is the medium access control strategy of choice for multihop networks that require deterministic delay guarantees. As such, many Internet of Things applications use TDMA-based protocols. Optimal slot assignment in such networks is NP-hard when there are strict deadline requirements and is generally done using heuristics that give suboptimal transmission schedules in linear time. However, existing heuristics make a scheduling decision at each time slot based on the same criterion, without considering its effect on subsequent network states or scheduling actions. Here, we first identify a set of node features that captures the information necessary to represent the network state and to aid in building schedules using Reinforcement Learning (RL). We then propose three different centralized approaches to RL-based TDMA scheduling that vary in their training and network representation methods. Using RL allows applying different criteria at different time slots while considering the effect of a scheduling action on meeting the scheduling objective for the entire TDMA frame, resulting in better schedules. We compare the three proposed schemes in terms of how well they meet the scheduling objectives and their applicability to networks with memory and time constraints. One of the proposed schemes, RLSchedule, is particularly suited to constrained networks. Simulation results for a variety of network scenarios show that RLSchedule reduces the percentage of packets missing deadlines by up to 60% compared to the best available baseline heuristic.
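
    The sketch below illustrates one way a per-slot, feature-based scheduling decision could be structured; it is not RLSchedule itself. A hand-set linear scoring function stands in for the learned RL policy, and the node features (hop count, queue length, deadline slack) and the interference model are assumptions made for the example.

```python
# Illustrative sketch (not RLSchedule): per-slot selection of non-conflicting
# transmitters scored by a linear value function over node features. In the
# paper the policy is learned with RL; the weights here are hand-set stand-ins.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    hops_to_sink: int      # routing depth
    queued_packets: int    # packets waiting at this node
    min_slack: int         # slots left before the tightest deadline in its queue
    neighbors: set         # ids of nodes within interference range

def score(n, weights=(1.5, 1.0, -2.0)):
    """Stand-in for a trained value function: prefer deep, loaded, urgent nodes."""
    w_hop, w_q, w_slack = weights
    return w_hop * n.hops_to_sink + w_q * n.queued_packets + w_slack * n.min_slack

def schedule_slot(nodes):
    """Greedily pick a conflict-free set of transmitters for one TDMA slot."""
    chosen, blocked = [], set()
    for n in sorted((n for n in nodes if n.queued_packets > 0),
                    key=score, reverse=True):
        if n.node_id not in blocked:
            chosen.append(n.node_id)
            blocked |= n.neighbors | {n.node_id}
    return chosen

nodes = [
    Node(1, hops_to_sink=3, queued_packets=2, min_slack=1, neighbors={2}),
    Node(2, hops_to_sink=2, queued_packets=1, min_slack=4, neighbors={1, 3}),
    Node(3, hops_to_sink=1, queued_packets=3, min_slack=2, neighbors={2}),
]
print("transmitters this slot:", schedule_slot(nodes))
```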

    Distributed Fault Tolerance for WSNs with Routing Tree Overlays

    WSNs are inherently power constrained and are often deployed in harsh environments. As such, node death is a possibility that must be considered when designing protocols for such networks. Rerouting of data is generally necessary so that data from the descendants of a dead node can still reach the sink. Since slot allocation in TDMA MAC protocols is generally based on the routing tree, all nodes must switch to the new routing tree to avoid collisions. This necessitates disseminating the fault information to all nodes reliably. We propose a flooding algorithm for disseminating the fault information reliably across the network, even over a lossy channel. Simulation results show that the proposed flooding scheme consumes less energy and converges faster than simple flooding. Rerouting the data may result in an increased TDMA schedule length. The energy expenditure of the newly assigned parents also increases, because they have to relay data from more children than before. We propose two distributed parent assignment algorithms in this paper. The first algorithm minimizes the change in the TDMA schedule and the second balances the load among the newly assigned parents. Simulations of random topologies show that the increase in the TDMA frame length with the first algorithm is smaller than with random parent assignment. We also observe that the lifetime of the most energy-constrained node (after fault recovery) is longer with the second algorithm than with random parent assignment.
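
    As a toy illustration of the two parent-reassignment criteria described above (not the paper's algorithms), the sketch below shows an orphaned node choosing a new parent either to minimize the extra TDMA slots a candidate would need or to balance relay load via residual energy per child. The candidate attributes and tie-breaking are hypothetical.

```python
# Illustrative sketch (not the paper's algorithms): two simple rules an
# orphaned child might apply to pick a new parent after its parent dies.

def pick_parent_min_schedule_change(candidates):
    """Rule 1: prefer the candidate whose TDMA slot demand grows the least,
    approximated here by the fewest extra slots needed to relay one more
    child's traffic."""
    return min(candidates, key=lambda c: c["extra_slots_if_adopted"])

def pick_parent_balanced(candidates):
    """Rule 2: prefer the candidate with the most residual energy per child,
    spreading relay load among the newly assigned parents."""
    return max(candidates,
               key=lambda c: c["residual_energy"] / (c["num_children"] + 1))

# Hypothetical neighbor table at an orphaned node after the fault flood.
candidates = [
    {"id": 7, "extra_slots_if_adopted": 1, "num_children": 4, "residual_energy": 0.9},
    {"id": 9, "extra_slots_if_adopted": 3, "num_children": 1, "residual_energy": 0.6},
]
print("min schedule change ->", pick_parent_min_schedule_change(candidates)["id"])
print("load balanced       ->", pick_parent_balanced(candidates)["id"])
```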

    DGRAM: A Delay Guaranteed Routing and MAC Protocol for Wireless Sensor Networks

    This paper presents an integrated MAC and routing protocol called Delay Guaranteed Routing and MAC (DGRAM) for delay-sensitive wireless sensor network (WSN) applications. DGRAM is a TDMA-based protocol designed to provide a deterministic delay guarantee in an energy-efficient manner. The design is based on slot reuse to reduce the latency of a node in accessing the medium, while ensuring contention-free medium access. The transmission and reception cycles of nodes are carefully computed so that data is transported from the source towards the sink while nodes sleep at other times to conserve energy. Thus, routes of data packets are integrated into DGRAM. We provide a detailed design of the time slot assignment and a delay analysis of the protocol. One major advantage of DGRAM over other TDMA protocols is that the slot assignment is done in a fully distributed manner, making a DGRAM network self-configuring. We have simulated DGRAM using the ns2 simulator and compared the results with those of SMAC for a similar network. Simulation results show that the delay experienced by data packets is always less than the analytical delay bound for which the protocol is designed. As per the simulation results, the average energy consumption does not change as the event rate changes, and is less than that of SMAC. This characteristic of DGRAM provides flexibility in choosing various operating parameters without having to worry about energy efficiency.
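
    The sketch below is a highly simplified illustration of tier-based slot reuse, not DGRAM's actual slot-assignment formula: nodes in tiers that are a fixed number of hops apart share transmission slots, so the frame length depends on the reuse distance rather than on the network depth. The reuse distance and the slots-per-tier budget are assumed values for the example.

```python
# Illustrative sketch (not DGRAM's slot formula): tier-based TDMA assignment
# where slots are reused every REUSE tiers, so spatially separated nodes can
# transmit concurrently without contention.
REUSE = 3            # tiers between nodes that may share a slot (assumed)
SLOTS_PER_TIER = 4   # slots reserved for the nodes of one tier (assumed)

def tx_slot(tier, index_in_tier):
    """Slot in the frame at which a node in `tier` with a local index transmits."""
    group = tier % REUSE                      # tiers in the same group reuse slots
    return group * SLOTS_PER_TIER + (index_in_tier % SLOTS_PER_TIER)

def frame_length():
    """Frame length is fixed by the reuse pattern, not by the network depth."""
    return REUSE * SLOTS_PER_TIER

# A node two hops from the sink, second node in its tier:
print("frame length:", frame_length(), "slots; tx slot:", tx_slot(tier=2, index_in_tier=1))
```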

    T-Move: A Light-Weight Protocol for Improved QoS in Content-Centric Networks with Producer Mobility

    Recent interest in applications where content itself is of primary importance has triggered the exploration of a variety of protocols and algorithms. For such information-centric networks, architectures such as Content-Centric Networking (CCN) have been shown to deliver good network performance. However, these architectures are still evolving to cater for application-specific requirements. This paper proposes T-Move, a light-weight solution for producer mobility and caching at the edge that is especially suitable for content-centric networks with mobile content producers. T-Move introduces a novel concept called trendiness of data for CCN/Named Data Networking (NDN)-based networks. It enhances network performance and quality of service (QoS) using two trendiness-based strategies: cache replacement and proactive content pushing to handle producer mobility. It uses simple operations with small control-message overhead and is suitable for networks where the response needs to be quick. Simulation results using ndnSIM show reduced traffic and content retrieval time and an increased cache hit ratio with T-Move, compared to MAP-Me and plain NDN, for networks of different sizes and mobility rates.
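
    As an illustration of a trendiness-style cache replacement policy (the exact T-Move metric is not given in the abstract), the sketch below scores cached content by exponentially decayed request counts and evicts the least trendy entry on insertion. The decay half-life and content names are assumptions made for the example.

```python
# Illustrative sketch (not T-Move's exact metric): a "trendiness" score that
# favours recently and frequently requested content, used to choose a cache
# eviction victim.
import math, time

HALF_LIFE = 60.0  # seconds after which a request's contribution halves (assumed)

class TrendCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}   # content name -> (trendiness, last_update_time)

    def _decayed(self, name, now):
        score, t = self.store[name]
        return score * math.exp(-math.log(2) * (now - t) / HALF_LIFE)

    def access(self, name, now=None):
        """Record a request; insert the content, evicting the least trendy entry."""
        now = now if now is not None else time.time()
        score = self._decayed(name, now) + 1.0 if name in self.store else 1.0
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda n: self._decayed(n, now))
            del self.store[victim]
        self.store[name] = (score, now)

cache = TrendCache(capacity=2)
for t, name in [(0, "/videos/a"), (1, "/videos/a"), (2, "/news/b"), (90, "/maps/c")]:
    cache.access(name, now=t)
print(sorted(cache.store))  # the repeatedly requested "/videos/a" survives; "/news/b" is evicted
```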
